Annals of Internal Medicine
● American College of Physicians
All preprints, ranked by how well they match Annals of Internal Medicine's content profile, based on 27 papers previously published here. The average preprint has a 0.03% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Volesky-Avellaneda, K. D.; Wang, J. H.; Pfeiffer, R. M.; Castenson, D.; Israni, A. K.; Miller, J. M.; Musgrove, D.; Shiels, M. S.; Snyder, J. J.; Yu, K. J.; Engels, E. A.
Underlying medical conditions, graft failure, and immunosuppression place solid organ transplant recipients ("recipients") at heightened risk of death. We report the underlying causes of death among adult US recipients and compare their mortality to that of the US general population. We obtained causes of death by linking a sample of deaths from the US organ transplant registry to the National Death Index (NDI) and weighted the linked deaths to represent all deaths among recipients aged ≥18 years during 1999-2019. To compare mortality to the US general population, we calculated standardized mortality ratios (SMRs). Among 496,467 recipients, 153,491 deaths occurred, of which 99,373 were NDI-linked. Leading causes of death were heart disease (16.9% of deaths), graft failure (14.9%), and cancer (14.4%). Compared to the US general population, recipients had 4.02 times the risk of death, and mortality was elevated for all 16 causes analyzed (e.g., 3.32-fold for heart disease and 2.08-fold for cancer) except dementia/Alzheimer's disease. During 2015-2019, overall mortality was elevated 3.08-fold, with lung recipients experiencing the highest elevation (SMR=7.91), followed by heart (3.20), liver (2.86), and kidney (2.81) recipients. Although mortality improved over time, US recipients continue to face substantially elevated mortality, both overall and for common causes of death.
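The standardized mortality ratio reported above is the ratio of observed deaths in the cohort to the deaths expected if general-population age-specific rates had applied. A minimal sketch with entirely made-up inputs (the function and all numbers are illustrative, not the study's data):

```python
# Hypothetical SMR calculation: expected deaths are obtained by applying
# general-population age-specific mortality rates to the cohort's
# person-years, then SMR = observed / expected.

def smr(person_years_by_age, observed_deaths, population_rates_by_age):
    """person_years_by_age: {age_band: cohort person-years};
    population_rates_by_age: {age_band: deaths per person-year in the
    general population}. Returns (SMR, expected deaths)."""
    expected = sum(person_years_by_age[a] * population_rates_by_age[a]
                   for a in person_years_by_age)
    return observed_deaths / expected, expected

# Illustrative inputs (not from the study):
py = {"18-44": 50_000, "45-64": 120_000, "65+": 80_000}
rates = {"18-44": 0.001, "45-64": 0.005, "65+": 0.03}
ratio, exp_deaths = smr(py, observed_deaths=12_000,
                        population_rates_by_age=rates)
print(round(ratio, 2), round(exp_deaths))
```

An SMR above 1 indicates mortality in excess of what the general-population rates would predict, as with the elevated ratios reported for recipients above.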
Shaw, R.; Haque, A.; Fitzsimons, J.; Hamidi, A.; O'Connor, T.; Roloff, G.; Bemiss, B.; Kallwitz, E.; Hagen, P.; Berg, S.
Solid organ transplant is a curative treatment for end organ disease. However, the immunosuppressive therapy required to prevent graft rejection increases the likelihood of developing a subsequent malignancy. This retrospective cohort study from a multi-center academic hospital system investigated solid organ transplantation, immunosuppression, and the risk of subsequent malignancy. Of the 5,591 patients and 6,142 transplanted organs studied, 517 subsequent malignancies were identified. Skin cancer was the most common type of malignancy to be diagnosed, whereas liver cancer was the first malignancy to present, at a median time of one year post-transplant. Subsequent malignancy was proportionally more often diagnosed in non-Hispanic White transplant recipients compared to other racial groups. Heart and lung transplant recipients had relatively higher rates of subsequent malignancy than liver and kidney transplant recipients, but this finding was not significant after adjusting for immunosuppressive medications. Multivariable Cox proportional hazards analysis and random forest variable-importance calculations identified statistically significant associations of sirolimus and azathioprine with higher rates of malignancy after transplant, while tacrolimus was associated with lower rates of post-transplant neoplasia.
Short Fabic, M.; Choi, Y.; Bishai, D.
Using COVID-19 Case Surveillance Public Use Data from the Centers for Disease Control and Prevention, we estimate monthly age-adjusted case fatality rates (CFRs) for four major groups: non-Hispanic (NH) Whites, NH Blacks, NH Asians, and Hispanics. Available data show that CFRs across racial/ethnic groups have become more equal over time. Nevertheless, racial and ethnic disparities persist. NH Whites consistently experience lower CFRs; NH Blacks generally experience higher case fatality among younger patients; and NH Asians generally experience higher case fatality among older patients. Age-adjusted CFRs reveal dramatically different racial and ethnic disparities that are hidden by crude CFRs. Such adjustment is imperative for understanding COVID-19's toll.
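Age adjustment of the kind described above is typically done by direct standardization: weighting each group's age-specific case fatality rates by a single standard age distribution, so groups with different age mixes become comparable. A minimal sketch with hypothetical numbers (the helper and all figures are illustrative, not the study's data):

```python
# Direct age standardization of a case fatality rate (CFR):
# adjusted CFR = sum over age bands of (standard weight * age-specific CFR).

def age_adjusted_cfr(deaths_by_age, cases_by_age, standard_weights):
    """standard_weights: {age_band: share of the standard population};
    weights should sum to 1. Returns the age-adjusted CFR."""
    return sum(standard_weights[a] * deaths_by_age[a] / cases_by_age[a]
               for a in standard_weights)

# Illustrative data for one hypothetical group:
standard = {"0-49": 0.6, "50-69": 0.3, "70+": 0.1}
group_a = age_adjusted_cfr({"0-49": 10, "50-69": 60, "70+": 200},
                           {"0-49": 10_000, "50-69": 4_000, "70+": 1_000},
                           standard)
print(round(group_a, 4))
```

Because every group is weighted to the same standard distribution, differences in the adjusted CFRs reflect age-specific fatality rather than differing age structures, which is why crude and age-adjusted CFRs can disagree as the abstract describes.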
Wetzler, H. P.; Wetzler, E. A.
Background: It has been suggested that many of those who died from COVID-19 were older, had more comorbidities, and would have died within a short period anyway. We estimated the number and percent of excess deaths due to COVID-19 in April 2020 in the United States, New York City, and Michigan. Methods: For each locale we calculated attributable fractions in the exposed, comparing observed COVID-19 deaths and expected deaths. In addition, we estimated the number of months it would take for the excess deaths to occur without the virus and the proportions of the populations that were infected leading to the April deaths. We compared the excess deaths from the attributable fraction method to those obtained by comparing weekly deaths in 2019 and 2020. Results: Using an assumed infection fatality rate of 1%, the percentages of excess deaths were 95%, 97%, and 95% in the US, NYC, and MI, equivalent to 54,560; 14,951; and 3,338 deaths, respectively. Absent the virus, these deaths would have occurred over 21.0, 29.2, and 18.4 months in the respective locations. An estimated 1.7% of the US population was infected between March 13 and April 10, 2020. Nearly 19% were infected in NYC. Conclusions: Over 75% of COVID-19 deaths in April 2020 were excess deaths, meaning they would not have occurred in April without SARS-CoV-2 but would have been spread out over the ensuing 18 to 29 months. Confirmed cases in the US under-report the actual number of infections by at least an order of magnitude. Excess death numbers calculated using the attributable fraction in the exposed are similar to those obtained from weekly mortality reports.
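The attributable fraction in the exposed used above compares observed deaths among the infected to the deaths that would have been expected in the same people over the same period absent the virus. A sketch with made-up inputs (all numbers illustrative, not the study's):

```python
# Attributable fraction in the exposed:
#   AF_e = (observed - expected) / observed
# Excess deaths are then AF_e * observed, i.e. observed - expected.

def attributable_fraction_exposed(observed_deaths, expected_deaths):
    """Returns (AF_e, excess deaths) among the exposed group."""
    af = (observed_deaths - expected_deaths) / observed_deaths
    return af, af * observed_deaths

# Illustrative: 20,000 observed COVID-19 deaths in a month, 1,000 of which
# would have been expected that month without infection.
af, excess = attributable_fraction_exposed(observed_deaths=20_000,
                                           expected_deaths=1_000)
print(round(af, 3), round(excess))
```

With these hypothetical inputs the attributable fraction is 95%, the same order as the April 2020 percentages reported above, because expected near-term deaths were small relative to observed COVID-19 deaths.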
VA COVID-19 Observational Research Collaboratory; Iwashyna, T. J.
An essential precondition for successful "herd immunity" strategies for the control of SARS-CoV-2 is that reinfection with the virus be relatively rare. Some infection control, prioritization, and testing strategies for SARS-CoV-2 were designed on the premise of rare reinfection. The U.S. Veterans Health Administration (VHA) includes 171 medical centers and 1,112 outpatient sites of care, with widespread SARS-CoV-2 test availability. We used the VHA's unified, longitudinal electronic health record to measure the frequency of reinfection with SARS-CoV-2 at least 90 days after initial diagnosis. We identified 308,051 initial cases of SARS-CoV-2 infection diagnosed in VHA between March 2020 and January 2022; 58,456 (19.0%) were associated with VHA hospitalizations. A second PCR-positive test occurred in 9,203 patients at least 90 days after their first positive test in VHA; 1,562 (17.0%) were associated with VHA hospitalizations. An additional 189 cases were identified as PCR-positive a third time at least 90 days after their second PCR-positive infection in VHA; 49 (25.9%) were associated with VHA hospitalizations. The absolute number of reinfections increased from fewer than 500 per month through November 2021 to over 4,000 per month in January 2022.
Ahn, D. J.; Nakayama, T.; Attia, A. M.; White, M.; Eap, D.; Narang, N.; Khush, K. K.; Parker, W. F.; Sasaki, K.
Background: In the United States heart allocation system, when transplant centers submit applications for status exceptions to increase waitlist priority, patients obtain the requested status upgrades immediately while their applications are sent to the regional review boards (RRBs) and reviewed retrospectively. How much time elapses between obtaining a status upgrade through exception and application receipt by the RRBs, and how often transplants occur during this period, is unknown. Methods: Using the Scientific Registry of Transplant Recipients (SRTR), we identified all adult heart transplant candidates listed between October 18, 2018 and December 31, 2023 with submitted applications for status exceptions. We assessed 1) the amount of time elapsed between submission of exception applications and their receipt by the RRBs and 2) the rate of heart transplantation during this "travel" time, stratified by whether the applications were eventually approved or denied. Additionally, using complete match run data, we estimated how many listed patients were skipped by candidates who received transplants with exceptions that were ultimately denied. Results: 135 transplant centers submitted status exception requests on behalf of 8,269 adult candidates during the study period, of whom 608 (7.4%) received a denial at least once. The median time from obtaining higher priority statuses immediately via exceptions to application receipt by the RRBs was 3 days. Of the 8,269 patients, 2,087 (25.2%) received transplants before the RRBs even received their applications: 115 (18.9%) among the 608 with eventual denials and 1,972 (25.7%) among the 7,661 with approvals. At 2 weeks, the cumulative incidence of heart transplantation before application receipt was 19.1% (95% CI [16.0%, 22.3%]) for eventual denials and 26.2% (95% CI [25.2%, 27.1%]) for approvals (p < 0.001).
Based on match run data, the 115 patients who received transplants with denied exceptions bypassed more than seven thousand potential transplant recipients. Conclusions: More than 25% of patients with status exception requests receive heart transplants before their applications are even received by their respective RRBs, let alone reviewed. This raises significant concerns about the efficacy and fairness of retrospective review of exception requests in the allocation of valuable donor hearts.
Adam, H.; Bermea, R. S.; Yang, M. Y.; Celi, L. A. G.; Ghassemi, M.
Background: There are known racial disparities in the organ transplant allocation system in the United States. However, prior work has yet to establish whether transplant center decisions on offer acceptance -- the final step in the allocation process -- contribute to these disparities. Objective: To estimate racial differences in the acceptance of organ offers by transplant center physicians on behalf of their patients. Design: Retrospective cohort analysis using data from the Scientific Registry of Transplant Recipients (SRTR) on patients who received an offer for a heart, liver, or lung transplant between January 1, 2010 and December 31, 2020. Setting: Nationwide, waitlist-based. Patients: 32,268 heart transplant candidates, 102,823 liver candidates, and 25,780 lung candidates, all aged 18 or older. Measurements: 1) Association between offer acceptance and two race-based variables: candidate race and donor-candidate race match; 2) association between offer rejection and time to patient mortality. Results: Black race was associated with significantly lower odds of offer acceptance for livers (OR=0.93, CI: 0.88-0.98) and lungs (OR=0.80, CI: 0.73-0.87). Donor-candidate race match was associated with significantly higher odds of offer acceptance for hearts (OR=1.11, CI: 1.06-1.16), livers (OR=1.10, CI: 1.06-1.13), and lungs (OR=1.13, CI: 1.07-1.19). Rejecting an offer was associated with shorter survival times for all three organs (heart hazard ratio=1.16, CI: 1.09-1.23; liver HR=1.74, CI: 1.66-1.82; lung HR=1.21, CI: 1.15-1.28). Limitations: Our study analyzed the observational SRTR dataset, which has known limitations. Conclusion: Offer acceptance decisions are associated with inequity in the organ allocation system. Our findings highlight the additional barriers that Black patients face in accessing organ transplants and demonstrate the need for standardized practice, continuous distribution policies, and better organ procurement.
Javan, E. M.; Fox, S. J.; Meyers, L. A.
For each US county, we calculated the probability of an ongoing COVID-19 epidemic that may not yet be apparent. Based on confirmed cases as of April 15, 2020, COVID-19 is likely spreading in 86% of counties containing 97% of US population. Proactive measures before two cases are confirmed are prudent.
Goldberg, D. S.; Chyou, D.; Wulf, R.; Wadsworth, M.
CMS updated the Final Rule for OPO Conditions for Coverage on 11/20/2020 to include new OPO metrics, tiers of classification, and a new definition of a donor as an individual with: a) ≥1 organ transplanted; or b) a pancreas procured for research or islet cell transplantation. We conducted a retrospective cohort study using data from the OPTN/UNOS. The number of pancreata procured for research increased four-fold from 2020 to 2022, despite stable numbers of other organs procured for research. Of the 57 OPOs, 8 (14.0%) procured >100 pancreata for research in 2022, accounting for 1,548 (58.2%) of pancreata research procurements. Of those 8 OPOs, seven would be at risk for possible (Tier 2; n=3) or definite (Tier 3; n=4) decertification based on CMS's 2022 interim report. Based on these data, two Tier 3 OPOs would be reclassified as Tier 2 and one as Tier 1; three Tier 2 OPOs would be reclassified as Tier 1; and three Tier 3s would be reclassified as borderline Tier 2. The pancreas research carveout may detrimentally impact the system of organ donation by failing to accurately measure OPO performance and to flag (and decertify) underperforming OPOs unless CMS revises its final rule.
Hasjim, B. J.; Azafar, G.; Lee, F. G.; Diwan, T. S.; Raju, S.; Gross, J. A.; Sidhu, A.; Ichii, H.; Krishnan, R. G.; Mamdani, M.; Sharma, D.; Bhat, M.
Importance: Transplantation is one of the few areas in medicine where the definitive treatment is rationed. Subjective decision-making poses challenges to the transplant selection process. It has been proposed that large language models (LLMs) acting as artificial intelligence (AI) agents could provide objectivity in decision-making to solve complex problems. Objective: To examine the performance of a multidisciplinary selection committee of AI agents (AI-SC) as a proof of concept toward objectivity in the liver transplant (LT) selection process. Design: The AI-SC consisted of four LLMs: transplant hepatologist, transplant surgeon, cardiologist, and social worker. Zero-shot prompting with chain-of-thought was used. Decisions were made based on clinicodemographic characteristics at the time of waitlisting and LT. Setting: National LT cohort. Participants: Adult patients receiving deceased donor LT from 2004-2023 were extracted from the Scientific Registry of Transplant Recipients (SRTR) and clinical vignettes were generated. Standard absolute contraindications to LT were randomly assigned to a subset of patients to expose the AI-SC to cases of patients declined for LT. Exposures: Clinicodemographic characteristics at waitlisting and transplantation. Main Outcomes and Measures: The AI-SC's accuracy in either: 1) listing candidates if LT would offer a 6-month or 1-year survival benefit or 2) declining candidates if contraindications to LT are present or if LT would not offer those survival benefits. Results: Of 8,412 patients, 83.6% were waitlisted and 16.4% had contraindications to LT. The AI-SC was able to accurately identify contraindications to LT (accuracy: 98.2%, 95% CI 97.9%-98.4%) and predict 6-month (94.9%, 95% CI 94.4%-95.3%) and 1-year (92.0%, 95% CI 91.4%-92.6%) survival. HCC burden beyond Milan criteria was the most common reason for accepted patients being declined by the AI-SC (false negatives).
Malignancy was the most common cause of death prior to the 6-month or 1-year endpoints (false positives). The AI-SC most frequently did not perceive a lack of social support or severe cardiopulmonary disease as barriers to LT. Conclusions and Relevance: LLMs can be leveraged to simulate LT selection committee meetings and provide accurate, objective insights on patients who may or may not benefit from LT. Lessons learned from this proof of concept are a provocative step toward making the LT selection process more equitable and objective. Key Points: Question: Can a multidisciplinary selection committee of artificial intelligence-based agents (AI-SC) accurately select liver transplant (LT) candidates based on potential survival benefit and contraindications to LT? Findings: Clinical vignettes were generated from 8,412 LT candidates from the Scientific Registry of Transplant Recipients (SRTR). Of these, 16.4% were randomly assigned standard absolute contraindications to LT. The AI-SC (GPT-4, OpenAI) reviewed and selected LT candidates with accuracies of 98.2% in identifying contraindications to LT, 94.9% in predicting 6-month survival benefit, and 92.0% in predicting 1-year survival benefit. Meaning: Multi-agent models may be leveraged to provide guidance toward objective decision-making in transplant candidacy.
Ahn, D.; Attia, A.; Nakayama, T.; Narang, N.; Khush, K. K.; Parker, W. F.; Sasaki, K.
Introduction: After the 2018 allocation policy change, the rate of listings and transplants with durable LVADs decreased significantly in favor of bridging patients from temporary mechanical circulatory support to heart transplant. The Organ Procurement and Transplantation Network (OPTN) recently approved a policy, to be implemented in September 2026, stipulating that patients supported by durable LVADs for 6 and 8 years will obtain statuses 3 and 2, respectively. Methods: Using OPTN data, we identified all adult heart transplant candidates with a durable LVAD implanted between October 18, 2018 and May 31, 2025. We estimated the cumulative incidence of status upgrades and durable LVAD-related complications, treating transplantation and waitlist removal before experiencing complications as competing events. We also assessed how the composition of the adult heart transplant waitlist on June 1, 2025 would have changed under the upcoming policy. Results: During the study period, 3,881 adult patients were listed for heart transplant with a durable LVAD: 3,182 (82.0%) Abbott HeartMate 3, 568 (14.6%) Medtronic HeartWare HVAD, and 91 (2.3%) Abbott HeartMate II. Transplant centers submitted a total of 6,924 justifications for status upgrades due to LVAD-related complications (6.3% status 1, 34.3% status 2, and 59.4% status 3) for 1,500 (38.6%) of these patients, with a median of 3 per patient. The cumulative incidence of complications or status upgrades was 38.6% (95% CI 37.1%-40.2%). Nearly all of the 2,381 patients who did not experience any complication or status upgrade during listing were removed from the waitlist by 6 years. Had the upcoming OPTN policy change been implemented on June 1, 2025, the proportion of the waitlist that would have achieved higher priority status instantaneously was 0.06%.
Conclusions: The cumulative incidence of status upgrades and complications among heart transplant candidates with durable LVADs was nearly 40% within 6 years of device implantation. The upcoming OPTN policy to escalate patients to statuses 3 and 2 after 6 and 8 years of durable LVAD support, respectively, is unlikely to have a meaningful impact on waitlist priority status.
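Cumulative incidence in a competing-risks setting like the one above (where transplantation or waitlist removal precludes a later complication) is usually estimated nonparametrically in Aalen-Johansen form: at each event time t, the incidence of cause k grows by S(t-) * d_k(t) / n(t), where S is overall event-free survival and n(t) is the number still at risk. A minimal sketch on toy data (the function and numbers are illustrative, not the study's method or results):

```python
# Nonparametric cumulative incidence function (CIF) with competing risks.
# causes: 0 = censored; any other value is an observed event cause.

def cumulative_incidence(times, causes, cause_of_interest=1):
    """Returns {event time: CIF for cause_of_interest at that time}."""
    data = sorted(zip(times, causes))
    surv, cif, out = 1.0, 0.0, {}
    for t in sorted({t for t, c in data if c != 0}):
        at_risk = sum(1 for ti, _ in data if ti >= t)
        d_int = sum(1 for ti, ci in data if ti == t and ci == cause_of_interest)
        d_all = sum(1 for ti, ci in data if ti == t and ci != 0)
        cif += surv * d_int / at_risk   # increment uses survival just before t
        surv *= 1 - d_all / at_risk     # then update overall event-free survival
        out[t] = cif
    return out

# Toy data: cause 1 = complication/status upgrade,
# cause 2 = transplant or waitlist removal (competing), 0 = censored.
cif = cumulative_incidence([2, 3, 3, 5, 6, 7], [1, 2, 1, 0, 2, 1])
print(cif)
```

Unlike 1 minus a Kaplan-Meier curve that censors competing events, this estimator never overstates the probability of the event of interest, which is why competing-risks methods are the standard choice for waitlist analyses like the one above.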
Ekanayake, C.; Husain, S. A.; Yu, M. E.; Adler, J.; Muppavarapu, C. S.; Schold, J.; Mohan, S.
Allocation out of sequence (AOOS) allows organ procurement organizations (OPOs) to bypass the standard match-run to expedite kidney placement and prevent nonuse. Including all AOOS attempts, including those that do not lead to a successful transplant, is vital when assessing the impact of AOOS on organ utility. We assessed the frequency of AOOS documentation in discarded kidneys. Using Scientific Registry of Transplant Recipients (SRTR) Potential Transplant Recipient (PTR) offer data from 2021-2024, we identified match-runs with at least one discarded kidney. AOOS was defined according to Health Resources and Services Administration (HRSA) guidelines, and match-runs were stratified by kidney recovery and disposition patterns, focusing on 2024, when AOOS was well established. AOOS coding frequency was assessed within each group and across OPOs. In 2024, only 4.3% of all match-runs with at least one discarded kidney contained evidence of AOOS documentation. Across OPOs, AOOS-coded discards ranged from 0.0% to 17.1% (median 3.9%, IQR 2.7%-7.6%). AOOS documentation among discarded kidneys remains rare and inconsistent, suggesting major data-capture deficiencies that hinder accurate assessment of AOOS efforts. Improved AOOS reporting is essential before future expedited allocation pathways can be effectively evaluated or implemented.
Tambur, A. R.; Gmeiner, M.; Manski, C. F.
Incomplete information on human leukocyte antigen (HLA) allele typing is a persistent problem when analyzing the role of HLA in transplantation. To refine the predictions possible with partial knowledge of HLA typing, some researchers use HaploStats statistics on the frequencies of haplotypes within specified ethnic/national populations to impute complete HLA allele typing. We evaluated methods that use imputation to predict patient outcomes after organ transplantation, with a focus on prediction of graft survival conditional on typing information of the donor and recipient. Logical arguments show that imputation yields no predictive power when predictions are conditioned on all observed HLA typing data. Computational experiments indicate that imputation does not have predictive power when applied to risk-assessment models that make predictions conditional on only part of the observable HLA data. We therefore caution against reliance on imputation to overcome incomplete measurement. We encourage high-resolution typing of HLA antigens to improve prediction of transplant outcomes and matching of donors with recipients. Similar considerations should likely apply in other clinical settings.
Velez-Bermudez, M.; Leyva, Y.; Puttarajappa, C.; Kalaria, A.; Zhu, Y.; Ng, Y.-H.; Unruh, M.; Boulware, L. E.; Tevar, A.; Dew, M. A.; Myaskovsky, L.
Background: In the United States, streamlining the kidney transplantation (KT) evaluation process may reduce disparities and barriers to KT access. Prior work showed that the Kidney Transplant Fast Track (KTFT) program shortened this process and reduced racial disparities in waitlisting and overall KT. However, within a setting where evaluation-related structural barriers have been addressed, a comprehensive longitudinal evaluation incorporating sociocultural factors (e.g., medical mistrust, healthcare-related discrimination/racism) alongside race/ethnicity as prespecified predictors across multiple KT milestones, including KT type (living donor [LDKT] and deceased donor KT [DDKT]), has not been performed. Methods: In this secondary analysis, data came from the KTFT study, a prospective KT candidate cohort. Participants were recruited before the start of KT evaluation (05/2015-06/2018), coinciding with baseline measure collection, then followed via medical record through 08/2022. We used hierarchically-adjusted Fine-Gray proportional hazards models in this exploratory analysis. Results: Among 1,108 KT candidates (243 Black, 783 White, 82 Other), medical mistrust was associated with a lower cumulative incidence of waitlisting, but no other sociocultural factors were associated with outcomes. Racial and ethnic differences emerged for KT type: Black participants had a greater cumulative incidence of DDKT, and participants categorized as Other race/ethnicity had a lower cumulative incidence of LDKT, relative to White participants. Conclusions: Although KTFT reduced racial/ethnic disparities in waitlisting and overall KT receipt, we identified racial/ethnic differences in LDKT and DDKT. Medical mistrust was a significant barrier to waitlisting. Findings suggest that even when the KT evaluation process is streamlined, sociocultural factors and race/ethnicity may influence KT outcomes.
Hynes, D. M.; Niederhausen, M.; Chen, J.; Shahoumian, T.; Rowneki, M.; Hickok, A.; Shepherd-Banigan, M.; Hawkins, E. J.; Naylor, J.; Teo, A. R.; Govier, D. J.; Berry, K.; McCready, H. D.; Osborne, T. F.; Wong, E. S.; Hebert, P. L.; Smith, V. A.; Bowling, C. B.; Boyko, E. J.; Ioannou, G. N.; Iwashyna, T. J.; Maciejewski, M. L.; O'Hare, A. M.; Viglianti, E. M.; Bohnert, A.
Importance: The negative health-related effects of SARS-CoV-2 infection may include increased risk for self-directed violence. Objective: To assess the risk of suicide attempts and other self-directed violence among US Veterans with a positive polymerase chain reaction (PCR) test for SARS-CoV-2 compared to matched uninfected Veterans. Design, Setting, and Participants: Using a target trial emulation design supported by comprehensive electronic health records from the US Veterans Health Administration (VHA), Veterans who had a positive PCR test between March 1, 2020 and March 31, 2021 were matched with non-infected comparators. Monthly matching was anchored to the first positive PCR test for each patient. Groups were followed for one year thereafter. Exposure: Positive SARS-CoV-2 PCR test. Main Outcomes and Measures: Suicide attempts and self-directed violence documented in electronic health records by a VHA provider. Hazard ratios (HRs) for time to first suicide attempt and self-directed violence (separate models) for the infected versus comparator group were estimated using Cox regression models. Analyses were performed for short-term (days 1-30), long-term (days 31-365), and one-year (days 1-365) periods and further stratified by age and prior self-directed violence history. Sensitivity analyses included censoring to address comparators crossing over by later testing positive for SARS-CoV-2. Results: Among the 1,190,974 Veterans included, during the one-year period after the index date, 3,078 (0.258%) had a suicide attempt and 2,887 (0.242%) had self-directed violence. Regardless of follow-up duration, the HRs for suicide attempts and self-directed violence were higher for the infected group. For suicide attempts, short-term HR=2.54 (95% Confidence Interval [CI]: 2.05 to 3.15), long-term HR=1.30 (CI: 1.19 to 1.43), and one-year HR=1.41 (CI: 1.30 to 1.54). For self-directed violence, short-term HR=1.94 (CI: 1.51 to 2.49), long-term HR=1.32 (CI: 1.20 to 1.45), and one-year HR=1.38 (CI: 1.26 to 1.51).
Conclusions and Relevance: In matched cohorts, Veterans who had a positive SARS-CoV-2 PCR test had higher risks of suicide attempt and self-directed violence that were greatest within the first 30 days and present for at least one year following infection. These findings highlight the importance of assessing patient experiences of suicide attempt and other forms of self-directed violence during different time periods post-infection to identify opportunities to augment prevention efforts and support those affected. Trial Registration: Not applicable. Key Points: Question: What were the risks of suicide attempts and self-directed violence among US Veterans with SARS-CoV-2 infection compared to a matched cohort? Findings: In this target trial emulation study of a nationwide observational cohort of 1,190,974 matched US Veterans in the Veterans Health Administration from 3/1/2020-3/31/2021, those with a confirmed positive PCR test for SARS-CoV-2 had increased risk of both suicide attempts and self-directed violence that was greatest within 30 days after infection and persisted over the following year. Over the year, those in the infected group had 1.40 times the risk of a suicide attempt and 1.38 times the risk of experiencing self-directed violence versus those in the comparison group. Meaning: COVID-19 survivors may require additional screening and prevention resources for suicide attempts and other forms of self-directed violence.
Dehn, J.; Logan, B.; Shaw, B. E.; Devine, S.; Ciurea, S. O.; Horowitz, M.; He, N.; Pusic, I.; Srour, S. A.; Arai, S.; Juckett, M.; Uberti, J.; Hill, L.; Vasu, S.; Hogan, W. J.; Hayes-Lattin, B.; Westervelt, P.; Bashey, A.; Farhadfar, N.; Grunwald, M. R.; Leifer, E.; Symons, H.; Saad, A.; Vogel, J.; Erickson, C.; Buck, K.; Lee, S. J.; Pidala, J.
Importance: Patients requiring allogeneic hematopoietic cell transplantation have variable likelihoods of identifying an 8/8 HLA-matched unrelated donor. A Search Prognosis calculator can estimate this likelihood. Objective: To determine whether using a search algorithm based on donor search prognosis can result in a similar incidence of transplant between patients Very Likely (>90%) vs Very Unlikely (<10%) to have a matched unrelated donor. Design: This interventional trial utilized a Search Prognosis-based biologic assignment algorithm to guide donor selection. Trial enrollment ran from June 13, 2019 to May 13, 2022; data were analyzed as of September 7, 2023, with a median follow-up post-evaluability of 14.5 months. Settings: National multi-center Blood and Marrow Transplant Clinical Trials Network 1702 study of US participating transplant centers. Participants: Patients with acute myeloid and lymphoid leukemias, myelodysplastic syndrome, Hodgkin's and non-Hodgkin's lymphomas, severe aplastic anemia, and sickle cell disease referred to participating transplant centers were invited to participate. 2,225 patients were enrolled and 1,751 were declared evaluable for this study. Patients were declared evaluable once it was determined no suitable HLA-matched related donor was available. Intervention: Patients assigned to the Very Likely arm were to proceed with a matched unrelated donor, while those in the Very Unlikely arm were to utilize alternative donors. A third stratum, Less Likely (~25%) to find a matched unrelated donor, was observed under standard center practices but was not part of the primary objective. Main Outcome: Cumulative incidence of transplantation by Search Prognosis arm. Results: Of the 1,751 evaluable patients, 413 (24%) were from racial/ethnic minorities. Search prognosis was Very Likely in 958 (55%), Less Likely in 517 (30%), and Very Unlikely in 276 (16%). 1,171 (67%) received HCT, 384 (22%) died without HCT, and 196 (11%) remained alive without HCT.
Among the 1,234 patients in the Very Likely and Very Unlikely arms, the adjusted cumulative incidence (95% CI) of HCT at 6 months was 59.8% (56.7-62.8) in the Very Likely group versus 52.3% (46.1-58.5) in the Very Unlikely group (P=0.113). Conclusions: A prospective Search Prognosis-based algorithm can be effectively implemented in a national multicenter clinical trial. This approach resulted in rapid alternative donor identification and comparable rates of HCT in patients Very Likely and Very Unlikely to find a matched unrelated donor. Trial Registration: NCT03904134
Young-Xu, Y.; Korves, C.; Zwain, G.; Satram, S.; Drysdale, M.; Reyes, C.; Cheng, M. M.; Epstein, L.; Marconi, V. C.; Ginde, A.
Background: Data on the effectiveness of sotrovimab in preventing COVID-19-related hospitalization or mortality, particularly after the emergence of the Omicron variant, are limited. Methods: We determined the real-world clinical effectiveness of sotrovimab for prevention of 30-day COVID-19-related hospitalization or mortality using a retrospective cohort within the U.S. Department of Veterans Affairs (VA) healthcare system. Veterans aged ≥18 years, diagnosed with COVID-19 between December 1, 2021 and April 4, 2022, were included. Sotrovimab recipients (n=2,816) were exactly matched to untreated controls (n=11,250) on date of diagnosis, vaccination status, and region. The primary outcome was COVID-19-related hospitalization or all-cause mortality within 30 days of diagnosis. Cox proportional hazards modeling estimated hazard ratios (HRs) and 95% confidence intervals (CIs) for the association between receipt of sotrovimab and outcomes. Results: During BA.1 dominance, compared to matched controls, sotrovimab-treated patients had a 70% lower risk of hospitalization or mortality within 30 days (HR 0.30; 95% CI, 0.23-0.40), a 66% lower risk of 30-day hospitalization (HR 0.34; 95% CI, 0.25-0.46), and a 77% lower risk of 30-day all-cause mortality (HR 0.23; 95% CI, 0.14-0.38). During BA.2 dominance, sotrovimab-treated patients had a 71% lower risk (HR 0.29; 95% CI, 0.08-0.98) of 30-day COVID-19-related hospitalization or emergency or urgent care. Limitations include confounding by indication. Conclusions: Using national real-world data from high-risk and predominantly vaccinated Veterans, administration of sotrovimab, compared with no treatment, was associated with reduced risk of 30-day COVID-19-related hospitalization or all-cause mortality during the Omicron BA.1 period and reduced risk of progression to severe COVID-19 during the BA.2-dominant period.
Summary: Examination of national real-world evidence demonstrates that sotrovimab is effective in preventing at-risk COVID-19 cases from progressing to severe SARS-CoV-2 infection, compared to matched untreated cases, during the Delta and early Omicron variant waves in the U.S. Veteran population.
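The design above hinges on exact matching: each treated Veteran is paired only with untreated controls identical on diagnosis date, vaccination status, and region (the reported 2,816:11,250 ratio is roughly 1:4). A minimal sketch of that step, not the authors' code; the record field names, the 1:4 ratio, and the toy data are assumptions for illustration:

```python
import random

def exact_match(treated, controls, keys=("diag_date", "vax_status", "region"),
                ratio=4, seed=0):
    """Match each treated patient to up to `ratio` untreated controls that
    share identical values on every matching key (exact matching,
    sampled without replacement)."""
    rng = random.Random(seed)
    # Index the control pool by its matching-key tuple.
    pool = {}
    for c in controls:
        pool.setdefault(tuple(c[k] for k in keys), []).append(c)
    matched = []
    for t in treated:
        cell = pool.get(tuple(t[k] for k in keys), [])
        take = rng.sample(cell, min(ratio, len(cell)))
        for c in take:
            cell.remove(c)  # a control is used at most once
        if take:
            matched.append((t, take))
    return matched

# Toy illustration (hypothetical records):
treated = [{"id": 1, "diag_date": "2022-01-10", "vax_status": "boosted", "region": "NE"}]
controls = [
    {"id": 10, "diag_date": "2022-01-10", "vax_status": "boosted", "region": "NE"},
    {"id": 11, "diag_date": "2022-01-10", "vax_status": "boosted", "region": "NE"},
    {"id": 12, "diag_date": "2022-01-11", "vax_status": "none", "region": "SW"},
]
pairs = exact_match(treated, controls)
```

Hazard ratios would then be estimated on the matched cohort; exact matching trades sample size (unmatched treated patients are dropped) for balance on the chosen keys.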
Klaassen, F.; Swartwood, N.; Chitwood, M. H.; Lopes, R.; Haraguchi, M.; Salomon, J. A.; Cohen, T.; Menzies, N. A.
Introduction: Effective immune protection against SARS-CoV-2 infection and severe COVID-19 disease continues to change due to viral evolution and waning immunity. We estimated population-level immunity to SARS-CoV-2 for each of the 50 U.S. states and the District of Columbia from January 2020 through December 2023. Methods: We updated a model of SARS-CoV-2 infections to align with the latest evidence on SARS-CoV-2 natural history and waning of immunity, and to integrate the various data sources available throughout the pandemic. We used this model to produce population estimates of effective protection against SARS-CoV-2 infection and severe COVID-19 disease. Results: On December 30, 2023, 99.9% of the U.S. population had experienced immunological exposure to SARS-CoV-2 through infection and/or vaccination, with 99.4% (95% credible interval [CrI]: 92.4-100%) having had at least one SARS-CoV-2 infection. Despite this high exposure, average population-level protection against infection was 53.6% (95% CrI: 38.7-71.5%). Population-level protection against severe disease was 82.6% (95% CrI: 71.5-91.7%). Discussion: A new wave of SARS-CoV-2 infections and COVID-19-associated hospitalizations began near the end of 2023, with the introduction of the JN.1 variant. This upturn suggests that the U.S. population remains at risk of SARS-CoV-2 infection and severe COVID-19 disease despite the high level of cumulative exposure. This decline in effective protection is likely due to both waning immunity and continued viral evolution.
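The gap between near-universal exposure (99.9%) and modest effective protection (53.6%) is driven by waning. A deliberately simplified sketch of that mechanism, not the authors' model (which integrates many data sources): here protection decays exponentially from an assumed initial level, and the population estimate is just the average over individuals' times since last exposure. All numbers are illustrative assumptions:

```python
import math

def effective_protection(p0, months_since, half_life_months):
    """Illustrative waning curve: protection decays exponentially from its
    initial level p0 with the given half-life (a simplification)."""
    decay = math.log(2) / half_life_months
    return p0 * math.exp(-decay * months_since)

def population_average(months_since_exposure, p0, half_life_months):
    """Average effective protection over individuals' times since their
    last immunological exposure (infection or vaccination)."""
    return sum(effective_protection(p0, m, half_life_months)
               for m in months_since_exposure) / len(months_since_exposure)

# Hypothetical inputs: 85% initial protection, 6-month half-life,
# individuals last exposed 1-12 months ago.
avg = population_average(list(range(1, 13)), p0=0.85, half_life_months=6.0)
```

Even with everyone exposed, the population average sits well below the initial protection level, which is the qualitative point the abstract makes; viral evolution would lower p0 itself over time.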
Malamon, J. S.; Bashain, E.; Cain, M. T.; Bhagwandin, B.; Kaplan, B.; Hoffman, J. R. H.
Key Points. Question: Does the current national six-status heart allocation system rank waitlisted transplant patients based on medical urgency? Can this heart allocation system's prognostic performance be improved? Findings: The six-status system is poorly calibrated, lacks sufficient statistical discrimination, and underestimates risk in the highest-risk patients, that is, those with an observed six-month mortality probability greater than 2%. By combining the current status system with six additional patient characteristics (previous transplant, ventilation, mean pulmonary capillary wedge pressure, willingness to accept a donor after cardiac death, diabetes status, and most recent creatinine), we correctly predicted greater than 80% of six-month waitlist mortalities in 7,706 study participants. Meaning: This study challenges the safety and efficacy of the current national heart allocation system. Importance: In December 2016, the Organ Procurement and Transplantation Network (OPTN) approved a bylaw that restructured the national heart allocation policy from a three-status to a six-status system. This new allocation system, which aimed to assign the highest priority to the patients with the highest mortality risk, went into effect on October 18, 2018. Since then, studies have identified limitations of the current system, yet no changes have been made to improve it. Objective: Given the clear importance of ranking patients based on medical urgency, we evaluated the six-status heart allocation system to determine its correlation with observed mortality (calibration) and its ability to predict six-month patient mortality risk and waitlist survival. We identified six additional patient characteristics associated with waitlist mortality and combined them with the six-status score to significantly improve the current allocation system's ability to predict six-month waitlist mortality.
Design: A retrospective, secondary analysis of the Scientific Registry of Transplant Recipients (SRTR) database of heart transplant candidates and recipients waitlisted from October 18, 2018, to December 31, 2024. Setting: The United States. Participants: Single-organ heart transplant candidates, 18 years of age and older, who were placed on the waitlist (N = 19,275). Patients listed for multi-organ transplantation were excluded. Exposures: All-cause waitlist mortality. Main Outcomes and Measures: The primary outcome of this study was validation of the calibration and prognostic performance of the current heart allocation system. The secondary outcome was a simple model that greatly improves upon the current system's ability to accurately (>80%) predict waitlisted-patient mortality. Results: With a mean calibration slope of 0.94 (0.66, 1.21) and an area under the receiver operating characteristic curve of 0.71 (0.47, 0.87), the current allocation system is poorly calibrated, has only moderate statistical discrimination, and underestimates risk in the most critically ill patients. Hazard and time-series regression analyses confirmed that the six-status system does not adequately rank patients based on medical urgency. Our combined model demonstrates that the national allocation system can be improved. Conclusions and Relevance: While the current heart distribution system accounts for some patient risk factors, a more objective and accurate model is needed to achieve the OPTN's strategic objective to more reliably model and predict patient risk and survival likelihood. Our model more accurately predicts patient waitlist mortality and will better inform waitlist management and improve waitlist survival by prioritizing medical urgency.
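The discrimination metric quoted above (area under the receiver operating characteristic curve, 0.71) is a concordance statistic: the probability that a randomly chosen patient who died is scored as higher-risk than a randomly chosen survivor. A minimal sketch of that computation, with hypothetical risk scores (not data from the study; calibration slope would additionally require regressing observed outcomes on predicted risk):

```python
def c_statistic(scores_events, scores_nonevents):
    """Concordance (C-statistic / AUC): the fraction of (event, non-event)
    pairs in which the event patient has the higher risk score; ties count
    one half. 0.5 is chance-level discrimination, 1.0 is perfect."""
    concordant = 0.0
    for e in scores_events:
        for n in scores_nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(scores_events) * len(scores_nonevents))

# Hypothetical predicted six-month mortality risks:
events = [0.9, 0.7, 0.4]         # patients who died on the waitlist
nonevents = [0.3, 0.2, 0.4, 0.1] # patients who survived
c = c_statistic(events, nonevents)
```

An AUC of 0.71 thus means a patient who died outranked a survivor in predicted risk only about 71% of the time, which is why the authors describe the system's discrimination as moderate.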
Massie, A.; Yan, L.; Xue, R.; Stewart, D. E.; Husain, S. A.; Levan, M. L.; Gentry, S.; Lonze, B. E.; Segev, D.
A substantial proportion of recovered deceased-donor (DD) kidneys go unused. Accumulated refusals by transplant centers during the offer process may signal nonuse risk, and quantifying this phenomenon could inform frameworks for rescue strategies or out-of-sequence (OOS) placement. Using OPTN data on adult DD kidneys offered for transplant in 2024, we empirically estimated the probability of nonuse as a function of accumulated refusal count (ARC). Kidneys transplanted OOS were excluded from analysis. Among recovered adult DD kidneys offered in-sequence, the risk of nonuse exceeded 50% after ARC=6 for blood type O kidneys, after ARC=4 for types A and B, and after ARC=1 for type AB. Risk exceeded 80% after ARC=128 (type O), ARC=55 (type A), ARC=50 (type B), and ARC=14 (type AB), and exceeded 90% after 980, 414, 278, and 41 refusals, respectively. The C-statistic of ARC, by blood type, ranged from 0.896 to 0.933. ARC thresholds offer a pragmatic trigger for rescue allocation, incorporating center perception of kidney quality not easily captured in standard metrics. A policy allowing OPOs to offer kidneys OOS or deploy alternative rescue strategies once a certain ARC threshold is reached may improve utilization of hard-to-place donor kidneys while keeping easier-to-place kidneys in-sequence.
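The proposed trigger is simple to operationalize. A minimal sketch under one interpretive assumption (not stated verbatim in the abstract): "risk of nonuse after ARC=k" is read as the empirical nonuse fraction among kidneys whose offer sequence accumulated at least k refusals. The toy data are invented for illustration:

```python
def nonuse_risk_at_or_beyond(kidneys, k):
    """Among kidneys that accumulated at least k refusals, the empirical
    fraction ultimately not transplanted. `kidneys` is a list of
    (refusal_count, was_used) pairs; returns None if no kidney reached k."""
    reached = [(r, used) for r, used in kidneys if r >= k]
    if not reached:
        return None
    return sum(1 for _, used in reached if not used) / len(reached)

def first_arc_exceeding(kidneys, threshold):
    """Smallest ARC at which empirical nonuse risk exceeds `threshold`
    (e.g., 0.5) -- the kind of cutoff proposed as a rescue-allocation
    trigger, computed separately per blood type in the study."""
    max_arc = max(r for r, _ in kidneys)
    for k in range(max_arc + 1):
        risk = nonuse_risk_at_or_beyond(kidneys, k)
        if risk is not None and risk > threshold:
            return k
    return None

# Toy offer histories: (accumulated refusal count, was the kidney used?)
kidneys = [(0, True), (1, True), (2, True), (5, False),
           (7, False), (3, True), (8, False)]
trigger = first_arc_exceeding(kidneys, 0.5)
```

In practice an OPO would monitor a kidney's live refusal count against the pre-computed blood-type-specific threshold (6, 4, 4, or 1 for O, A, B, and AB at the 50% level reported above) and switch to rescue placement once it is crossed.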